
    Explanation Trees for Causal Bayesian Networks

    Bayesian networks can be used to extract explanations about the observed state of a subset of variables. In this paper, we explicate the desiderata of an explanation and confront them with the concept of explanation proposed by existing methods. The necessity of taking causal approaches into account when a causal graph is available is discussed. We then introduce causal explanation trees, based on the construction of explanation trees using the measure of causal information flow (Ay and Polani, 2006). This approach is compared to several other methods on known networks.
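
The root split of an explanation tree can be sketched as follows. This is a hypothetical toy, not the paper's algorithm: it scores candidate explanatory variables for a target by plain observational mutual information, a simplified stand-in for the causal information-flow measure of Ay and Polani (2006); the joint distribution and variable names are made up for illustration.

```python
import math
from collections import defaultdict

# Made-up joint distribution over binary variables A, B and target T.
joint = {
    (0, 0, 0): 0.30, (0, 0, 1): 0.05,
    (0, 1, 0): 0.10, (0, 1, 1): 0.10,
    (1, 0, 0): 0.05, (1, 0, 1): 0.15,
    (1, 1, 0): 0.05, (1, 1, 1): 0.20,
}

def marginal(dist, idxs):
    # Marginalize the joint onto the given variable positions.
    m = defaultdict(float)
    for outcome, p in dist.items():
        m[tuple(outcome[i] for i in idxs)] += p
    return m

def mutual_information(dist, i, j):
    # I(X_i; X_j) in bits, computed from the joint table.
    pij = marginal(dist, [i, j])
    pi, pj = marginal(dist, [i]), marginal(dist, [j])
    return sum(p * math.log2(p / (pi[(x,)] * pj[(y,)]))
               for (x, y), p in pij.items() if p > 0)

# Root split of the tree: which variable best "explains" T (index 2)?
scores = {"A": mutual_information(joint, 0, 2),
          "B": mutual_information(joint, 1, 2)}
best = max(scores, key=scores.get)
print(best)  # the variable chosen at the root
```

A full tree would recurse on each value of the chosen variable with the conditioned distribution; the paper's causal variant replaces the observational score with an interventional one.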

    Risque garanti pour les modèles de discrimination multi-classes (Guaranteed Risk for Multi-Class Discrimination Models)

    Peer-reviewed conference paper. We study the generalization performance of multi-class discrimination systems. We establish two bounds on this performance, as a function of two capacity measures of the family of functions computed: the growth function and the covering numbers. These bounds are evaluated on a classifier-combination model that estimates the posterior class probabilities. This makes it possible to compare how well suited the two capacity measures are.
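
To make the notion of a covering number concrete, here is an illustrative sketch, not taken from the paper: an empirical sup-norm covering number of a small, hypothetical function class (linear functions f_w(x) = w·x over a finite weight grid), computed with a greedy cover. N(eps) is the smallest number of functions needed so that every class member lies within eps of some cover element on the sample.

```python
import numpy as np

sample = np.linspace(-1, 1, 50)
# Hypothetical class: f_w(x) = w * x with w on a finite grid.
weights = np.linspace(-1, 1, 21)
outputs = np.array([w * sample for w in weights])  # one row per function

def greedy_cover_size(outputs, eps):
    # Greedily pick uncovered functions as cover centers until every
    # function is within eps (sup norm on the sample) of some center.
    uncovered = list(range(len(outputs)))
    cover = 0
    while uncovered:
        center = uncovered[0]
        cover += 1
        uncovered = [i for i in uncovered
                     if np.max(np.abs(outputs[i] - outputs[center])) > eps]
    return cover

print(greedy_cover_size(outputs, eps=0.25))
```

Capacity bounds of the kind the abstract describes control generalization error in terms of how fast such cover sizes grow as eps shrinks.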

    Algorithmic Stability and Generalization Performance

    We present a novel way of obtaining PAC-style bounds on the generalization error of learning algorithms, explicitly using their stability properties. A stable learner is one for which the learned solution does not change much with small changes in the training set. The bounds we obtain do not depend on any measure of the complexity of the hypothesis space (e.g. VC dimension) but rather depend on how the learning algorithm searches this space, and can thus be applied even when the VC dimension is infinite. We demonstrate that regularization networks possess the required stability property and apply our method to obtain new bounds on their generalization performance.
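
The stability property can be illustrated empirically. A minimal sketch, not the paper's derivation: ridge regression (a linear special case of a regularization network) is uniformly stable with stability on the order of 1/(lambda·n), so deleting one training point barely moves its predictions. The data here is synthetic and the constants are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 1.0
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam * n * I)^{-1} X^T y
    n, d = X.shape
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)

w_full = ridge_fit(X, y, lam)
# Leave one point out and measure the worst change in predictions.
w_loo = ridge_fit(np.delete(X, 0, axis=0), np.delete(y, 0), lam)
max_pred_change = np.max(np.abs(X @ (w_full - w_loo)))
print(max_pred_change)  # small: the learned solution barely moves
```

Shrinking lam or n makes the learner less stable, which is exactly the dependence the stability-based bounds track in place of VC dimension.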

    A kernel method for multilabelled classification

    This article presents a Support Vector Machine (SVM) like learning system to handle multi-label problems. Such problems are usually decomposed into many two-class problems but the expressive power of such a system can be weak [5, 7]. We explore a new direct approach. It is based on a large margin ranking system that shares many properties with SVMs. We tested it on a Yeast gene functional classification problem with positive results.
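
The core idea of large-margin label ranking can be sketched as follows. This is a hedged illustration, not the paper's exact formulation: a pairwise hinge loss that pushes every relevant label's score above every irrelevant label's score by a margin of 1, normalized by the number of (relevant, irrelevant) pairs; the example scores and label mask are made up.

```python
import numpy as np

def ranking_hinge_loss(scores, relevant):
    # scores: (L,) label scores; relevant: boolean mask of true labels.
    loss = 0.0
    for p in np.where(relevant)[0]:        # relevant labels
        for q in np.where(~relevant)[0]:   # irrelevant labels
            loss += max(0.0, 1.0 - (scores[p] - scores[q]))
    # Normalize by the number of (relevant, irrelevant) pairs.
    pairs = relevant.sum() * (~relevant).sum()
    return loss / max(pairs, 1)

scores = np.array([2.0, 0.5, -1.0, 0.2])
relevant = np.array([True, False, False, True])
print(ranking_hinge_loss(scores, relevant))
```

Training would minimize this loss (plus a norm penalty on the per-label weight vectors) over all examples, which is what gives the system its SVM-like large-margin character.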